113 research outputs found

    Complex Systems Science and Brain Dynamics

    Brain systems, with their complex and temporally intricate dynamics, have been difficult to unravel and comprehend. While great advances have been made in understanding genetics, neural behavior, gray versus white matter, and synaptic plasticity, it remains a particular challenge to understand how human diseases and disorders develop from internal neural-level irregularities, e.g., in channels, membranes, and mutations, before they lead to an observable disease. The field of systems biology has advanced significantly, giving rise to high expectations of tying separate biological phenomena into more expansive rational systems. Denis Noble, a pioneer of systems biology who developed the first viable mathematical model of the working heart in 1960, has been influential in calling the community to focus on creating computer and mathematical models of organic life to interpret functions from the molecular level to that of the whole organism (Noble, 2006). Our approach to modeling…
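
    The membrane-level modeling this abstract gestures at can be illustrated with the FitzHugh-Nagumo equations, a standard two-variable reduction of Hodgkin-Huxley-style channel models (and a distant relative of Noble's cardiac model). This is a generic sketch, not the paper's model; the constants are conventional textbook values.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    """Forward-Euler integration of the FitzHugh-Nagumo membrane model."""
    v, w = -1.0, -0.5          # membrane potential and recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + I          # fast voltage dynamics
        dw = eps * (v + a - b * w)           # slow recovery dynamics
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return np.array(trace)

trace = fitzhugh_nagumo()
spikes = np.sum((trace[:-1] < 1.0) & (trace[1:] >= 1.0))   # upward threshold crossings
print("spikes over the simulated interval:", int(spikes))
```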

    Discontinuities in recurrent neural networks

    This paper studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net-function and a sigmoid-like continuous activation-function. The authors introduce…
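
    As a rough sketch of the kind of model described (not the authors' construction), the snippet below updates a small recurrent network in which each neuron applies a sigmoid to a polynomial net-function of the state and input; the quadratic tensor Q stands in for the polynomial term, and all sizes and weights are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def arnn_step(x, u, W, V, Q, b):
    """One synchronous update: sigmoid of a polynomial (here quadratic)
    net-function of the current state x and external input u."""
    net = W @ x + V @ u + b + np.einsum('ijk,j,k->i', Q, x, x)
    return sigmoid(net)

rng = np.random.default_rng(0)
n, m = 4, 2                                   # hypothetical network and input sizes
W = rng.normal(scale=0.3, size=(n, n))        # linear recurrent weights
V = rng.normal(scale=0.3, size=(n, m))        # input weights
Q = rng.normal(scale=0.05, size=(n, n, n))    # quadratic part of the net-function
b = np.zeros(n)

x = rng.uniform(size=n)
for t in range(5):
    x = arnn_step(x, rng.uniform(size=m), W, V, Q, b)
print("state after 5 steps:", x)
```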

    Multiscale Agent-based Model of Tumor Angiogenesis

    Computational models of cancer complement the biological study of tumor growth. However, existing modeling approaches can be both inefficient and inaccurate due to the difficulties of representing the complex interactions between cells and tissues. We present a three-dimensional multiscale agent-based model of tumor growth with angiogenesis. The model is designed to easily adapt to various cancer types, although we focus on breast cancer. It includes cellular (genetic control), tissue (cells, blood vessels, angiogenesis), and molecular (VEGF, diffusion) levels of representation. Unlike most cancer models, it includes both normally functioning tissue cells and tumor cells. Tumors grow following the expected spheroid cluster pattern, with growth limited by available oxygen. Angiogenesis, the process by which tumors may encourage new vessel growth for nutrient diffusion, is modeled with a new discrete approach that we propose will decrease computational cost. Our results show that, despite these new abstractions, we obtain results similar to previously accepted angiogenesis models. This may indicate that a more discrete approach should be considered by modelers in the future.
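
    A minimal sketch of the agent-based idea, reduced to two dimensions and stripped of the genetic control, VEGF signaling, and vessel-growth machinery of the actual model; the grid size, oxygen threshold, consumption, and diffusion constants are placeholders. Tumor cells divide only where diffused oxygen exceeds a threshold, which is what limits growth.

```python
import numpy as np

GRID = 40                     # 2-D lattice for brevity (the paper's model is 3-D)
OXYGEN_THRESH = 0.3           # below this, a tumor cell cannot divide
DIFFUSION = 0.2

rng = np.random.default_rng(1)
tumor = np.zeros((GRID, GRID), dtype=bool)
tumor[GRID // 2, GRID // 2] = True            # seed a single tumor cell
oxygen = np.ones((GRID, GRID))
vessel = np.zeros((GRID, GRID), dtype=bool)
vessel[:, 0] = True                           # a fixed vessel supplies oxygen at one edge
neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for step in range(200):
    # oxygen: replenished at vessels, consumed by tumor cells, diffuses to neighbours
    oxygen[vessel] = 1.0
    oxygen -= 0.05 * tumor
    oxygen += DIFFUSION * (np.roll(oxygen, 1, 0) + np.roll(oxygen, -1, 0) +
                           np.roll(oxygen, 1, 1) + np.roll(oxygen, -1, 1) - 4 * oxygen)
    np.clip(oxygen, 0.0, 1.0, out=oxygen)
    # agents: tumor cells with enough oxygen place a daughter in a random neighbour site
    ys, xs = np.nonzero(tumor & (oxygen > OXYGEN_THRESH))
    for y, x in zip(ys, xs):
        dy, dx = neighbours[rng.integers(4)]
        tumor[(y + dy) % GRID, (x + dx) % GRID] = True

print("tumor cells after 200 steps:", int(tumor.sum()))
```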

    Probabilistic analysis of a differential equation for linear programming

    In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. Using the framework of Random Matrix Theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. In this limit, we find that the distribution of the convergence rate is a scaling function, namely a function of one variable that combines three parameters: the number of variables, the number of constraints, and the convergence rate, rather than a function of these parameters separately. We also estimate numerically the distribution of computation times, namely the time required to reach a vicinity of the attracting fixed point, and find that it is also a scaling function. Using the problem-size dependence of the distribution functions, we derive high-probability bounds on the convergence rates and on the computation times.
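
    To make the setup concrete, here is a generic continuous-time system for a linear program: a log-barrier gradient flow integrated with Euler steps, whose attracting fixed point lies within roughly O(mu) of the optimal vertex. This is not the specific differential equation analyzed in the paper, and the toy problem, barrier weight, and step size are arbitrary.

```python
import numpy as np

# toy LP: minimize c.x subject to A x <= b; x = 0 is strictly feasible,
# and the optimum is at the vertex x = (-1, -1)
c = np.array([1.0, 2.0])
A = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([1.0, 1.0, 1.0])

x = np.zeros(2)               # strictly interior starting point
mu, dt = 0.05, 1e-3           # barrier weight and Euler step (placeholder values)
for step in range(50_000):
    slack = b - A @ x                       # positive while x is feasible
    grad = c + mu * A.T @ (1.0 / slack)     # gradient of c.x - mu * sum(log(slack))
    x -= dt * grad                          # Euler step of the gradient flow
print("fixed point (approximate LP optimum):", x)
```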

    Unsupervised Learning with Self-Organizing Spiking Neural Networks

    We present a system that hybridizes self-organizing map (SOM) properties with spiking neural networks (SNNs) and retains many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test various inhibition strategies, such as inhibition that grows with inter-neuron distance and inhibition with two distinct levels. The quality of the unsupervised learning algorithm is evaluated using examples with known labels. Several biologically inspired classification tools are proposed and compared, including a population-level confidence rating and n-grams using a spike-motif algorithm. Using the optimal choice of parameters, our approach produces improvements over state-of-the-art spiking neural networks.
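
    The two inhibition strategies mentioned above can be sketched as lateral weight matrices over a lattice of output neurons, used here in a toy leaky integrate-and-fire update. This is an illustration only, not the authors' trained network; the feed-forward filters, input, and all constants are made up.

```python
import numpy as np

SIDE = 8                                         # output neurons on an 8 x 8 lattice
N = SIDE * SIDE
coords = np.array([(i // SIDE, i % SIDE) for i in range(N)], dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# strategy 1: inhibition strength grows with inter-neuron distance
inh_growing = -0.5 * dist
np.fill_diagonal(inh_growing, 0.0)

# strategy 2: two distinct levels, weak for nearby neurons, strong for distant ones
inh_two_level = np.where(dist <= 2.0, -0.1, -5.0)
np.fill_diagonal(inh_two_level, 0.0)

def lif_step(v, x, spikes, W_in, W_inh, leak=0.9, thresh=20.0):
    """One leaky integrate-and-fire update with lateral inhibition from the
    neurons that spiked on the previous step."""
    v = np.where(spikes, 0.0, v)                 # reset neurons that just spiked
    v = leak * v + W_in @ x + W_inh @ spikes.astype(float)
    return v, v >= thresh

# toy run with random filters and a random input, using the distance-growing strategy
rng = np.random.default_rng(2)
W_in = rng.uniform(size=(N, 32))                 # would be learned by STDP in the paper
v, spikes = np.zeros(N), np.zeros(N, dtype=bool)
x = rng.uniform(size=32)
for t in range(50):
    v, spikes = lif_step(v, x, spikes, W_in, inh_growing)
print("neurons firing at t=50:", np.nonzero(spikes)[0])
```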

    Energy-based General Sequential Episodic Memory Networks at the Adiabatic Limit

    The General Associative Memory Model (GAMM) has a constant, state-dependent energy surface that leads the output dynamics to fixed points, retrieving single memories from a collection of memories that can be asynchronously preloaded. We introduce a new class of General Sequential Episodic Memory Models (GSEMM) that, in the adiabatic limit, exhibit a temporally changing energy surface, leading to a series of meta-stable states that are sequential episodic memories. The dynamic energy surface is enabled by newly introduced asymmetric synapses with signal propagation delays in the network's hidden layer. We study the theoretical and empirical properties of two memory models from the GSEMM class, differing in their activation functions. LISEM has non-linearities in the feature layer, whereas DSEM has non-linearity in the hidden layer. In principle, DSEM has a storage capacity that grows exponentially with the number of neurons in the network. We introduce a learning rule for the synapses based on the energy minimization principle and show that it can learn single memories and their sequential relationships online. This rule is similar to the Hebbian learning algorithm and Spike-Timing Dependent Plasticity (STDP), which describe conditions under which synapses between neurons change strength. Thus, GSEMM combines the static and dynamic properties of episodic memory under a single theoretical framework and bridges neuroscience, machine learning, and artificial intelligence.
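
    A compact way to see how asymmetric, delayed synapses turn fixed-point memories into a retrieved sequence is the classical delayed-asymmetry construction sketched below. It is an illustration in the same spirit as GSEMM, not the paper's equations; the pattern count, delay, and coupling strength LAM are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 200, 4
patterns = rng.choice([-1.0, 1.0], size=(K, N))   # an episodic sequence p0 -> p1 -> p2 -> p3

W_sym  = patterns.T @ patterns / N                 # symmetric part: stabilizes each memory
W_asym = patterns[1:].T @ patterns[:-1] / N        # asymmetric part: maps p_k toward p_(k+1)

LAM, DELAY = 1.5, 5                                # transition strength and delay (arbitrary)
x = patterns[0].copy()                             # start the network in the first memory
delayed = x.copy()                                 # a slow, delayed copy of the state
for t in range(30):
    x = np.sign(W_sym @ x + LAM * W_asym @ delayed)
    if (t + 1) % DELAY == 0:
        delayed = x.copy()                         # the delayed signal catches up periodically
    if t % 5 == 0:
        print(t, np.round(patterns @ x / N, 2))    # overlap with each memory in the sequence
```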

    Forward Signal Propagation Learning

    We propose a new learning algorithm for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation. In forward signal propagation learning (sigprop), there is only the forward path for learning and inference, so there are no additional structural or computational constraints on learning, such as feedback connectivity, weight transport, or a backward pass, which exist under backpropagation. Sigprop enables global supervised learning with only a forward path. This is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. By design, sigprop has better compatibility with models of learning in the brain and in hardware than backpropagation and alternative approaches to relaxing learning constraints. We also demonstrate that sigprop is more efficient in time and memory than these approaches. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals relative to backpropagation. To further support its relevance to biological and hardware learning, we use sigprop to train continuous-time neural networks with Hebbian updates and to train spiking neural networks without surrogate functions.
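
    A toy forward-only scheme in the spirit of sigprop (not the paper's algorithm): the label determines a hidden-layer target, and every update uses only locally available quantities, with no backward pass. As a simplification, the hidden target here is a fixed random vector per class, whereas sigprop propagates the learning signal through the same forward weights; the layer sizes, learning rate, and blob data are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

D_IN, D_H, N_CLASS = 20, 32, 3
W1 = rng.normal(scale=0.3, size=(D_H, D_IN))
W2 = rng.normal(scale=0.3, size=(N_CLASS, D_H))
targets = rng.uniform(size=(N_CLASS, D_H))     # fixed label-derived hidden targets (simplification)
LR = 0.005

def train_step(x, label, y_onehot):
    """Forward-only update: no backward pass; each layer applies a local rule."""
    global W1, W2
    h = relu(W1 @ x)
    W1 += LR * np.outer(targets[label] - h, x)          # local rule: move h toward its class target
    W2 += LR * np.outer(y_onehot - softmax(W2 @ h), h)  # local delta rule at the output

def predict(x):
    return int(np.argmax(W2 @ relu(W1 @ x)))

# toy data: three well-separated Gaussian blobs, one per class
X = np.concatenate([rng.normal(loc=c, size=(50, D_IN)) for c in range(N_CLASS)])
labels = np.repeat(np.arange(N_CLASS), 50)
Y = np.eye(N_CLASS)[labels]
for epoch in range(20):
    for x, l, y in zip(X, labels, Y):
        train_step(x, l, y)
print("training accuracy:", np.mean([predict(x) == l for x, l in zip(X, labels)]))
```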